Discrete Poisson equation


In mathematics, the discrete Poisson equation is the finite difference analog of the Poisson equation. In it, the discrete Laplace operator takes the place of the Laplace operator. The discrete Poisson equation is frequently used in numerical analysis as a stand-in for the continuous Poisson equation, although it is also studied in its own right as a topic in discrete mathematics.

On a two-dimensional rectangular grid

Using the finite-difference method to discretize the two-dimensional Poisson equation (assuming a uniform spatial discretization, [math]\displaystyle{ \Delta x=\Delta y }[/math]) on an m × n grid gives the following formula:[1] [math]\displaystyle{ ( {\nabla}^2 u )_{ij} = \frac{1}{\Delta x^2} (u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4 u_{ij}) = g_{ij} }[/math] where [math]\displaystyle{ 2 \le i \le m-1 }[/math] and [math]\displaystyle{ 2 \le j \le n-1 }[/math]. The preferred arrangement of the solution vector is natural ordering, which, prior to removing boundary elements, would look like: [math]\displaystyle{ \mathbf{u} = \begin{bmatrix} u_{11} , u_{21} , \ldots , u_{m1} , u_{12} , u_{22} , \ldots , u_{m2} , \ldots , u_{mn} \end{bmatrix}^\mathsf{T} }[/math]
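As a concrete illustration, the 5-point stencil above can be applied with a few lines of array code. This is a minimal sketch (the function name and grid are illustrative, not from the original text):

```python
import numpy as np

def laplacian_5pt(u, dx):
    """Apply the 5-point discrete Laplacian to the interior of a 2-D grid.

    u  : 2-D array of grid values, boundary included
    dx : uniform grid spacing (dx == dy, as assumed in the text)
    Returns an array of the same shape with the stencil applied at the
    interior nodes; boundary entries are left as zero.
    """
    g = np.zeros_like(u)
    g[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                     u[1:-1, 2:] + u[1:-1, :-2] -
                     4.0 * u[1:-1, 1:-1]) / dx**2
    return g
```

Because second differences of a quadratic are exact, applying this to [math]\displaystyle{ u = x^2 }[/math] returns exactly 2 at every interior node, which makes a convenient sanity check.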

This will result in an mn × mn linear system: [math]\displaystyle{ A\mathbf{u} = \mathbf{b} }[/math] where [math]\displaystyle{ A = \begin{bmatrix} ~D & -I & ~0 & ~0 & ~0 & \cdots & ~0 \\ -I & ~D & -I & ~0 & ~0 & \cdots & ~0 \\ ~0 & -I & ~D & -I & ~0 & \cdots & ~0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ ~0 & \cdots & ~0 & -I & ~D & -I & ~0 \\ ~0 & \cdots & \cdots & ~0 & -I & ~D & -I \\ ~0 & \cdots & \cdots & \cdots & ~0 & -I & ~D \end{bmatrix}, }[/math]

[math]\displaystyle{ I }[/math] is the m × m identity matrix, and [math]\displaystyle{ D }[/math], also m × m, is given by:[2] [math]\displaystyle{ D = \begin{bmatrix} ~4 & -1 & ~0 & ~0 & ~0 & \cdots & ~0 \\ -1 & ~4 & -1 & ~0 & ~0 & \cdots & ~0 \\ ~0 & -1 & ~4 & -1 & ~0 & \cdots & ~0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ ~0 & \cdots & ~0 & -1 & ~4 & -1 & ~0 \\ ~0 & \cdots & \cdots & ~0 & -1 & ~4 & -1 \\ ~0 & \cdots & \cdots & \cdots & ~0 & -1 & ~4 \end{bmatrix}, }[/math] and [math]\displaystyle{ \mathbf{b} }[/math] is defined by [math]\displaystyle{ \mathbf{b} = -\Delta x^2 \begin{bmatrix} g_{11} , g_{21} , \ldots , g_{m1} , g_{12} , g_{22} , \ldots , g_{m2} , \ldots , g_{mn} \end{bmatrix}^\mathsf{T}. }[/math]

For each [math]\displaystyle{ u_{ij} }[/math] equation, the columns of [math]\displaystyle{ D }[/math] correspond to a block of [math]\displaystyle{ m }[/math] components in [math]\displaystyle{ u }[/math]: [math]\displaystyle{ \begin{bmatrix} u_{1j} , & u_{2j} , & \ldots, & u_{i-1,j} , & u_{ij} , & u_{i+1,j} , & \ldots , & u_{mj} \end{bmatrix}^\mathsf{T} }[/math] while the columns of [math]\displaystyle{ I }[/math] to the left and right of [math]\displaystyle{ D }[/math] each correspond to other blocks of [math]\displaystyle{ m }[/math] components within [math]\displaystyle{ u }[/math]: [math]\displaystyle{ \begin{bmatrix} u_{1,j-1} , & u_{2,j-1} , & \ldots, & u_{i-1,j-1} , & u_{i,j-1} , & u_{i+1,j-1} , & \ldots , & u_{m,j-1} \end{bmatrix}^\mathsf{T} }[/math] and [math]\displaystyle{ \begin{bmatrix} u_{1,j+1} , & u_{2,j+1} , & \ldots, & u_{i-1,j+1} , & u_{i,j+1} , & u_{i+1,j+1} , & \ldots , & u_{m,j+1} \end{bmatrix}^\mathsf{T} }[/math]

respectively.

From the above, it can be inferred that [math]\displaystyle{ A }[/math] consists of [math]\displaystyle{ n }[/math] block columns, each of width [math]\displaystyle{ m }[/math]. It is important to note that prescribed values of [math]\displaystyle{ u }[/math] (usually lying on the boundary) have their corresponding elements removed from [math]\displaystyle{ I }[/math] and [math]\displaystyle{ D }[/math]. For the common case in which all the nodes on the boundary are set, we have [math]\displaystyle{ 2 \le i \le m - 1 }[/math] and [math]\displaystyle{ 2 \le j \le n - 1 }[/math], and the system has dimensions (m − 2)(n − 2) × (m − 2)(n − 2), where [math]\displaystyle{ D }[/math] and [math]\displaystyle{ I }[/math] have dimensions (m − 2) × (m − 2).
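The block structure described above can be assembled compactly with Kronecker products. The sketch below (the function name is illustrative) builds [math]\displaystyle{ A }[/math] for the case where all boundary nodes are prescribed, so the unknowns are the (m − 2)(n − 2) interior nodes in natural ordering:

```python
import numpy as np
from scipy.sparse import identity, diags, kron

def poisson_matrix(m, n):
    """Assemble the discrete Poisson matrix A for an m x n grid with all
    boundary values prescribed; unknowns are the (m-2) x (n-2) interior
    nodes in natural ordering (i varies fastest)."""
    p, q = m - 2, n - 2                          # interior nodes per direction
    D = diags([-1, 4, -1], [-1, 0, 1], shape=(p, p))
    I = identity(p)
    T = diags([-1, -1], [-1, 1], shape=(q, q))   # couples neighbouring blocks
    return (kron(identity(q), D) + kron(T, I)).tocsr()
```

For m = n = 5 this reproduces the 9 × 9 matrix of the example below: symmetric, with 4 on the diagonal, −1 for horizontal neighbours within each D block, and −1 for vertical neighbours through the −I blocks.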

Example

For a 5 × 5 ( [math]\displaystyle{ m = 5 }[/math] and [math]\displaystyle{ n = 5 }[/math] ) grid with all the boundary nodes prescribed, the system would look like: [math]\displaystyle{ \mathbf{u} = \begin{bmatrix} u_{22}, u_{32}, u_{42}, u_{23}, u_{33}, u_{43}, u_{24}, u_{34}, u_{44} \end{bmatrix}^\mathsf{T} }[/math] with [math]\displaystyle{ A = \left[\begin{array}{ccc|ccc|ccc} ~4 & -1 & ~0 & -1 & ~0 & ~0 & ~0 & ~0 & ~0 \\ -1 & ~4 & -1 & ~0 & -1 & ~0 & ~0 & ~0 & ~0 \\ ~0 & -1 & ~4 & ~0 & ~0 & -1 & ~0 & ~0 & ~0 \\ \hline -1 & ~0 & ~0 & ~4 & -1 & ~0 & -1 & ~0 & ~0 \\ ~0 & -1 & ~0 & -1 & ~4 & -1 & ~0 & -1 & ~0 \\ ~0 & ~0 & -1 & ~0 & -1 & ~4 & ~0 & ~0 & -1 \\ \hline ~0 & ~0 & ~0 & -1 & ~0 & ~0 & ~4 & -1 & ~0 \\ ~0 & ~0 & ~0 & ~0 & -1 & ~0 & -1 & ~4 & -1 \\ ~0 & ~0 & ~0 & ~0 & ~0 & -1 & ~0 & -1 & ~4 \end{array}\right] }[/math] and

[math]\displaystyle{ \mathbf{b} = \left[\begin{array}{l} -\Delta x^2 g_{22} + u_{12} + u_{21} \\ -\Delta x^2 g_{32} + u_{31} ~~~~~~~~ \\ -\Delta x^2 g_{42} + u_{52} + u_{41} \\ -\Delta x^2 g_{23} + u_{13} ~~~~~~~~ \\ -\Delta x^2 g_{33} ~~~~~~~~~~~~~~~~ \\ -\Delta x^2 g_{43} + u_{53} ~~~~~~~~ \\ -\Delta x^2 g_{24} + u_{14} + u_{25} \\ -\Delta x^2 g_{34} + u_{35} ~~~~~~~~ \\ -\Delta x^2 g_{44} + u_{54} + u_{45} \end{array}\right]. }[/math]

As can be seen, the boundary values of [math]\displaystyle{ u }[/math] are brought to the right-hand side of the equation.[3] The entire system is 9 × 9 while [math]\displaystyle{ D }[/math] and [math]\displaystyle{ I }[/math] are 3 × 3 and given by: [math]\displaystyle{ D = \begin{bmatrix} ~4 & -1 & ~0 \\ -1 & ~4 & -1 \\ ~0 & -1 & ~4 \\ \end{bmatrix} }[/math] and [math]\displaystyle{ -I = \begin{bmatrix} -1 & ~0 & ~0 \\ ~0 & -1 & ~0 \\ ~0 & ~0 & -1 \end{bmatrix}. }[/math]
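This example can be checked numerically by assembling the 9 × 9 system and solving it for a right-hand side whose exact solution is quadratic, since the 5-point stencil reproduces quadratics exactly. The grid and manufactured solution below are illustrative, not from the original text:

```python
import numpy as np

# 5x5 grid on [0,1]^2; manufactured solution u = x^2 + y^2, so g = lap(u) = 4.
N = 5
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing='ij')
u_exact = X**2 + Y**2

A = np.zeros((9, 9))
b = np.zeros(9)

def idx(i, j):                         # interior node (i,j) -> vector index
    return (j - 1) * 3 + (i - 1)       # natural ordering, i fastest

for j in range(1, 4):
    for i in range(1, 4):
        r = idx(i, j)
        A[r, r] = 4.0
        b[r] = -dx**2 * 4.0            # g_ij = 4 everywhere
        for (a, c) in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]:
            if 1 <= a <= 3 and 1 <= c <= 3:
                A[r, idx(a, c)] = -1.0
            else:                      # prescribed boundary value -> RHS
                b[r] += u_exact[a, c]

u = np.linalg.solve(A, b)
```

Because the discrete Laplacian is exact for quadratics, the computed interior values agree with `u_exact` to machine precision, and the assembled `b` has exactly the pattern of boundary contributions shown above.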

Methods of solution

Because [math]\displaystyle{ A }[/math] is block tridiagonal and sparse, many methods have been developed to solve this linear system for [math]\displaystyle{ \mathbf{u} }[/math] efficiently. Among them are a generalized Thomas algorithm, with a resulting computational complexity of [math]\displaystyle{ O(n^{2.5}) }[/math]; cyclic reduction; successive overrelaxation, with a complexity of [math]\displaystyle{ O(n^{1.5}) }[/math]; and the fast Fourier transform, with [math]\displaystyle{ O(n \log(n)) }[/math]. An optimal [math]\displaystyle{ O(n) }[/math] solution can also be computed using multigrid methods.[4]
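As an illustration of the fast-transform approach: with Dirichlet boundaries, the 5-point matrix is diagonalized by the discrete sine transform, so the system can be solved in [math]\displaystyle{ O(n \log n) }[/math] time. The sketch below uses SciPy's `scipy.fft.dstn`/`idstn` (DST-I) and takes the right-hand side as a p × q array in natural ordering; it is a sketch of the technique, not code from the original text:

```python
import numpy as np
from scipy.fft import dstn, idstn

def fft_poisson(B):
    """Solve A u = b for the 5-point Laplacian with Dirichlet boundaries.

    B : p x q array of right-hand-side values b_ij (natural ordering,
        first axis = i). Returns the solution as a p x q array.
    The eigenvalues of A are 4 - 2cos(j*pi/(p+1)) - 2cos(k*pi/(q+1)),
    with DST-I eigenvectors, so the solve is diagonal in DST space.
    """
    p, q = B.shape
    Bh = dstn(B, type=1)
    j = np.arange(1, p + 1)
    k = np.arange(1, q + 1)
    lam = (4 - 2 * np.cos(j * np.pi / (p + 1)))[:, None] \
          - 2 * np.cos(k * np.pi / (q + 1))[None, :]
    return idstn(Bh / lam, type=1)
```

Comparing against a dense direct solve of the Kronecker-product form of [math]\displaystyle{ A }[/math] confirms the two agree to machine precision.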

(Figure: convergence of various iterative methods on the discrete Poisson problem, plotting the infinity norm of the residual against iteration count and computer time.)

Applications

In computational fluid dynamics, when solving an incompressible flow problem, the incompressibility condition acts as a constraint on the pressure. Because the velocity and pressure fields are strongly coupled, no explicit equation for the pressure is available in this case; instead, taking the divergence of all terms in the momentum equation yields the pressure Poisson equation.

For an incompressible flow this constraint is given by: [math]\displaystyle{ \frac{ \partial v_x }{ \partial x} + \frac{ \partial v_y }{ \partial y} + \frac{\partial v_z}{\partial z} = 0 }[/math] where [math]\displaystyle{ v_x }[/math] is the velocity in the [math]\displaystyle{ x }[/math] direction, [math]\displaystyle{ v_y }[/math] is the velocity in the [math]\displaystyle{ y }[/math] direction, and [math]\displaystyle{ v_z }[/math] is the velocity in the [math]\displaystyle{ z }[/math] direction. Taking the divergence of the momentum equation and using the incompressibility constraint yields the pressure Poisson equation: [math]\displaystyle{ \nabla^2 p = f(\nu, V) }[/math] where [math]\displaystyle{ \nu }[/math] is the kinematic viscosity of the fluid and [math]\displaystyle{ V }[/math] is the velocity vector.[5]
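As a small illustration of how the right-hand side of such a pressure Poisson equation is formed in practice, the discrete divergence of a velocity field can be computed with central differences. This is a 2-D sketch with illustrative names, not a complete flow solver:

```python
import numpy as np

def divergence_2d(vx, vy, dx):
    """Central-difference divergence of a 2-D velocity field at the
    interior nodes; in projection-type incompressible-flow solvers this
    kind of quantity enters the RHS of the pressure Poisson equation."""
    div = np.zeros_like(vx)
    div[1:-1, 1:-1] = ((vx[2:, 1:-1] - vx[:-2, 1:-1]) +
                       (vy[1:-1, 2:] - vy[1:-1, :-2])) / (2.0 * dx)
    return div
```

Central differences are exact for linear fields, so the field [math]\displaystyle{ (v_x, v_y) = (x, y) }[/math] gives a discrete divergence of exactly 2 at every interior node.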

The discrete Poisson equation also arises in the theory of Markov chains. It appears as the relative value function of the dynamic programming equation in a Markov decision process, and in the construction of control variates for variance reduction in simulation.[6][7][8]
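In the Markov-chain setting, Poisson's equation reads [math]\displaystyle{ (I - P)h = c - \eta \mathbf{1} }[/math], where [math]\displaystyle{ P }[/math] is the transition matrix, [math]\displaystyle{ c }[/math] a cost function, and [math]\displaystyle{ \eta = \pi c }[/math] the steady-state average cost; the solution [math]\displaystyle{ h }[/math] is determined only up to an additive constant. A minimal numerical sketch (the 3-state chain and cost vector are made up for illustration):

```python
import numpy as np

# Hypothetical irreducible 3-state chain and per-state cost.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
c = np.array([1.0, 2.0, 3.0])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()
eta = pi @ c                     # steady-state average cost

# (I - P) is singular (rows sum to zero), so replace one equation by
# the normalization h[0] = 0 to pin down the additive constant.
M = np.eye(3) - P
M[0] = [1.0, 0.0, 0.0]
rhs = c - eta
rhs[0] = 0.0
h = np.linalg.solve(M, rhs)      # relative value function
```

Because [math]\displaystyle{ \pi (I - P) = 0 }[/math] and [math]\displaystyle{ \pi(c - \eta \mathbf{1}) = 0 }[/math], the replaced equation is satisfied automatically, so [math]\displaystyle{ h }[/math] solves the full Poisson equation.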

Footnotes

  1. Hoffman, Joe (2001), "Chapter 9. Elliptic partial differential equations", Numerical Methods for Engineers and Scientists (2nd ed.), McGraw–Hill, ISBN 0-8247-0443-6 .
  2. Golub, Gene H. and C. F. Van Loan, Matrix Computations, 3rd Ed., The Johns Hopkins University Press, Baltimore, 1996, pages 177–180.
  3. Cheney, Ward and David Kincaid, Numerical Mathematics and Computing 2nd Ed., Brooks/Cole Publishing Company, Pacific Grove, 1985, pages 443–448.
  4. CS267: Notes for Lectures 15 and 16, Mar 5 and 7, 1996, https://people.eecs.berkeley.edu/~demmel/cs267/lecture24/lecture24.html
  5. Fletcher, Clive A. J., Computational Techniques for Fluid Dynamics: Vol I, 2nd Ed., Springer-Verlag, Berlin, 1991, page 334–339.
  6. S. P. Meyn and R.L. Tweedie, 2005. Markov Chains and Stochastic Stability. Second edition to appear, Cambridge University Press, 2009.
  7. S. P. Meyn, 2007. Control Techniques for Complex Networks, Cambridge University Press, 2007.
  8. Asmussen, Søren, Glynn, Peter W., 2007. "Stochastic Simulation: Algorithms and Analysis". Springer. Series: Stochastic Modelling and Applied Probability, Vol. 57, 2007.
